The ability to estimate epistemic uncertainty is often crucial when deploying machine learning in the real world, but modern methods often produce overconfident, uncalibrated uncertainty predictions. A common approach to quantifying epistemic uncertainty, usable across a wide class of prediction models, is to train a model ensemble. In a naive implementation, the ensemble approach has high computational cost and high memory demand. This is particularly challenging for modern deep learning, where even a single deep network is already demanding in terms of compute and memory, and it has given rise to a number of attempts to emulate the model ensemble without actually instantiating separate ensemble members. We introduce FiLM-Ensemble, a deep, implicit ensemble method based on the concept of Feature-wise Linear Modulation (FiLM). That technique was originally developed for multi-task learning, with the aim of decoupling different tasks. We show that the idea can be extended to uncertainty quantification: by modulating the network activations of a single deep network with FiLM, one obtains a model ensemble with high diversity, and consequently well-calibrated estimates of epistemic uncertainty, at low computational overhead. Empirically, FiLM-Ensemble outperforms other implicit ensemble methods and comes very close to the upper bound of an explicit ensemble of networks (sometimes even beating it), at a fraction of the memory cost.
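To make the mechanism concrete, the following is a minimal PyTorch sketch of how a FiLM-based ensemble layer could look: each ensemble member owns a per-channel affine transform (gamma, beta) that modulates the activations of a shared backbone. All class and parameter names are illustrative assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn

class FiLMEnsembleLayer(nn.Module):
    """Modulates shared feature maps with per-member affine parameters.

    A minimal sketch of the FiLM-Ensemble idea; names are hypothetical.
    """

    def __init__(self, num_members: int, num_channels: int):
        super().__init__()
        # One (gamma, beta) pair per ensemble member and channel.
        self.gamma = nn.Parameter(torch.ones(num_members, num_channels))
        self.beta = nn.Parameter(torch.zeros(num_members, num_channels))

    def forward(self, x: torch.Tensor, member: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, H, W); member: (batch,) long tensor giving
        # the ensemble member each sample is routed through.
        g = self.gamma[member].unsqueeze(-1).unsqueeze(-1)  # (batch, C, 1, 1)
        b = self.beta[member].unsqueeze(-1).unsqueeze(-1)
        return g * x + b

# At inference, one would replicate each input across all members and average
# their outputs; the spread across members estimates epistemic uncertainty.
```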
The synergistic combination of deep learning models and Earth observation promises to support the Sustainable Development Goals (SDGs). New developments and a plethora of applications are already changing the way humanity will face the challenges of living on our planet. This paper reviews current deep learning approaches for Earth observation data, along with their applications toward monitoring and achieving the SDGs most impacted by the rapid development of deep learning in Earth observation. We systematically review case studies to 1) achieve zero hunger, 2) build sustainable cities, 3) deliver tenure security, 4) mitigate and adapt to climate change, and 5) preserve biodiversity, with a focus on important societal, economic, and environmental impacts. Exciting times are ahead, in which algorithms and Earth data can help in our endeavors to address the climate crisis and support more sustainable development.
Monitoring and managing Earth's forests in an informed manner is an important requirement for addressing challenges such as biodiversity loss and climate change. While traditional, in-situ or airborne campaigns for forest assessment provide accurate data for analysis at the regional level, scaling them to entire countries and beyond at high temporal resolution is hardly possible. In this work, we propose a Bayesian deep learning approach to estimate forest structure variables country-wide at 10 m resolution, using freely available satellite imagery as input. Our method jointly transforms Sentinel-2 optical images and Sentinel-1 synthetic aperture radar images into maps of five different forest structure variables: 95th height percentile, mean height, density, Gini coefficient, and fractional cover. We train and test our model on data from 41 airborne laser scanning missions across Norway and demonstrate that it generalizes to unseen test regions, achieving normalized mean absolute errors between 11% and 15%, depending on the variable. Our work is also the first to propose a Bayesian deep learning approach for predicting forest structure variables with well-calibrated uncertainty estimates. These increase the trustworthiness of the model and its suitability for downstream tasks that require reliable confidence estimates, such as informed decision making. We present an extensive set of experiments to validate the accuracy of the predicted maps as well as the quality of the predicted uncertainties. To demonstrate scalability, we provide Norway-wide maps for the five forest structure variables.
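One common way to obtain the kind of calibrated per-pixel uncertainties described above is a heteroscedastic regression head trained with a Gaussian negative log-likelihood. The sketch below illustrates that general pattern for the five forest structure variables; the paper's actual architecture and loss may differ, and all names are hypothetical.

```python
import torch
import torch.nn as nn

class RegressionHead(nn.Module):
    """Predicts a mean and a variance for each forest structure variable.

    A minimal sketch of heteroscedastic regression, one common way to
    obtain per-pixel uncertainty; not the paper's exact architecture.
    """

    def __init__(self, in_features: int, num_variables: int = 5):
        super().__init__()
        self.mean = nn.Linear(in_features, num_variables)
        # Predict log-variance for numerical stability.
        self.log_var = nn.Linear(in_features, num_variables)

    def forward(self, features):
        return self.mean(features), self.log_var(features)

def gaussian_nll(mean, log_var, target):
    # Negative log-likelihood of a Gaussian with predicted mean and
    # variance (up to an additive constant).
    return 0.5 * (log_var + (target - mean) ** 2 / log_var.exp()).mean()
```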
NASA's Global Ecosystem Dynamics Investigation (GEDI) is a key climate mission whose goal is to advance our understanding of the role of forests in the global carbon cycle. While GEDI is the first space-based laser explicitly optimized to measure vertical forest structure predictive of aboveground biomass, the accurate interpretation of this vast amount of waveform data across a broad range of observational and environmental conditions is challenging. Here, we present a novel supervised machine learning approach to interpret GEDI waveforms and regress canopy top height globally. We propose a probabilistic deep learning approach based on an ensemble of deep convolutional neural networks (CNNs) to avoid the explicit modeling of unknown effects, such as atmospheric noise. The model learns to extract robust features that generalize to unseen geographical regions and, in addition, yields reliable estimates of predictive uncertainty. Ultimately, the global canopy top height estimates produced by our model have an expected RMSE of 2.7 m with low bias.
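The ensemble aggregation behind such predictive uncertainties is standard: each CNN predicts a mean and variance for canopy top height, and the members' outputs are combined so that the total uncertainty decomposes into aleatoric and epistemic parts. A hedged sketch, assuming each model maps a batch of waveforms to a (mean, variance) pair:

```python
import torch

def ensemble_predict(models, waveform):
    """Aggregates canopy-height predictions from an ensemble of CNNs.

    Illustrative only: assumes each model maps a (batch, 1, length)
    waveform tensor to a (mean, variance) pair.
    """
    means, variances = [], []
    with torch.no_grad():
        for model in models:
            mu, var = model(waveform)
            means.append(mu)
            variances.append(var)
    means = torch.stack(means)        # (members, batch, 1)
    variances = torch.stack(variances)
    # Law of total variance: aleatoric + epistemic components.
    mean = means.mean(dim=0)
    aleatoric = variances.mean(dim=0)
    epistemic = means.var(dim=0)
    return mean, aleatoric + epistemic
```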
The well-documented presence of texture bias in modern convolutional neural networks has led to a plethora of algorithms that promote an emphasis on shape cues, often to support generalization to new domains. Yet, common datasets, benchmarks, and general model selection strategies are missing, and there is no agreed, rigorous evaluation protocol. In this paper, we investigate difficulties and limitations when training networks with reduced texture bias. In particular, we show that proper evaluation and meaningful comparisons between methods are not trivial. We introduce BiasBed, a testbed for texture- and style-biased training, including multiple datasets and a range of existing algorithms. It comes with an extensive evaluation protocol that includes rigorous hypothesis testing to gauge the significance of the results despite the considerable training instability of some style-bias methods. Our extensive experiments shed new light on the need for careful, statistically founded evaluation protocols for style bias (and beyond). For example, we find that some algorithms proposed in the literature do not significantly mitigate the impact of style bias at all. With the release of BiasBed, we hope to foster a common understanding of consistent and meaningful comparisons, and consequently faster progress towards learning methods free of texture bias. Code is available at https://github.com/D1noFuzi/BiasBed
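As a flavor of what such hypothesis testing can look like, the sketch below compares two algorithms over repeated training runs using Welch's t-test, which tolerates the unequal variances caused by training instability. The specific tests in BiasBed may differ, and the example numbers are made up purely for illustration.

```python
import numpy as np
from scipy import stats

def compare_algorithms(acc_a: np.ndarray, acc_b: np.ndarray, alpha=0.05):
    """Tests whether algorithm A significantly outperforms algorithm B.

    A minimal sketch in the spirit of BiasBed's protocol: accuracies from
    repeated training runs, compared with a hypothesis test instead of a
    single-run leaderboard.
    """
    # Welch's t-test does not assume equal variances across algorithms.
    t_stat, p_value = stats.ttest_ind(acc_a, acc_b, equal_var=False)
    significant = p_value < alpha and acc_a.mean() > acc_b.mean()
    return t_stat, p_value, significant

# Example: five runs per algorithm on a held-out stylized test set.
runs_a = np.array([0.61, 0.64, 0.58, 0.63, 0.60])
runs_b = np.array([0.55, 0.59, 0.41, 0.57, 0.52])
print(compare_algorithms(runs_a, runs_b))
```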
Côte d'Ivoire and Ghana, the world's largest cocoa producers, together account for two thirds of global cocoa production. In both countries, cocoa is a perennial crop that provides income to almost two million farmers. Yet precise maps of the area planted with cocoa are missing, which hinders an accurate quantification of expansion into protected areas, of production, and of yields, and limits the information available for improving sustainability governance. Here, we combine cocoa plantation data with publicly available satellite imagery in a deep learning framework and create high-resolution maps of cocoa plantations for both countries, validated in the field. Our results suggest that cocoa cultivation is an underlying driver of more than 37% of the forest loss in protected areas in Côte d'Ivoire and more than 13% in Ghana, and that official reports substantially underestimate the planted area, by up to 40% in Ghana. These maps provide a crucial basis for improving protection and economic development in cocoa-producing regions.
Radiomics uses quantitative medical imaging features to predict clinical outcomes. Currently, in a new clinical application, the optimal radiomics method has to be found manually from the wide range of available options through a heuristic trial-and-error process. In this study, we propose a framework to automatically optimize the construction of radiomics workflows per application. To this end, we formulate radiomics as a modular workflow and include a large collection of common algorithms for each component. To optimize the workflow per application, we use automated machine learning, combining random search with ensembling. We evaluate our method in twelve different clinical applications, resulting in the following areas under the curve: 1) liposarcoma (0.83); 2) desmoid-type fibromatosis (0.82); 3) primary liver tumors (0.80); 4) gastrointestinal tumors (0.77); 5) colorectal liver metastases (0.61); 6) melanoma metastases (0.45); 7) hepatocellular carcinoma (0.75); 8) mesenteric fibrosis (0.80); 9) prostate cancer (0.72); 10) glioma (0.71); 11) Alzheimer's disease (0.87); and 12) head and neck cancer (0.84). We show that our framework performs competitively compared to human experts, outperforms a radiomics baseline, and performs similarly or superior to Bayesian optimization and more advanced ensemble approaches. Finally, our method fully automatically optimizes the construction of radiomics workflows, thereby streamlining the search for radiomics biomarkers in new applications. To facilitate reproducibility and future research, we publicly release six datasets, the software implementation of our framework, and the code to reproduce this study.
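The core loop, random search over a modular workflow followed by ensembling the best candidates, can be sketched with scikit-learn components. The search space below is a hypothetical miniature of the one in the paper, purely for illustration.

```python
import random
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler, RobustScaler
from sklearn.feature_selection import SelectKBest
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def sample_workflow():
    """Draws one random configuration of a modular radiomics workflow.

    A hypothetical miniature search space; the framework in the paper
    includes far more components and options per component.
    """
    scaler = random.choice([StandardScaler(), RobustScaler()])
    selector = SelectKBest(k=random.choice([10, 20, 50]))
    clf = random.choice([
        LogisticRegression(max_iter=1000),
        RandomForestClassifier(n_estimators=random.choice([100, 500])),
    ])
    return Pipeline([("scale", scaler), ("select", selector), ("clf", clf)])

def random_search(X, y, n_iter=100, top_k=10):
    # Score many random workflows, then keep the best ones to ensemble.
    scored = []
    for _ in range(n_iter):
        wf = sample_workflow()
        auc = cross_val_score(wf, X, y, scoring="roc_auc", cv=5).mean()
        scored.append((auc, wf))
    scored.sort(key=lambda s: s[0], reverse=True)
    return scored[:top_k]  # candidates for the final ensemble
```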
Modeling lies at the core of both the financial and the insurance industry for a wide variety of tasks. The rise and development of machine learning and deep learning models have created many opportunities to improve our modeling toolbox. Breakthroughs in these fields often come with the requirement of large amounts of data. Such large datasets are often not publicly available in finance and insurance, mainly due to privacy and ethics concerns. This lack of data is currently one of the main hurdles in developing better models. One possible option for alleviating this issue is generative modeling. Generative models are capable of simulating fake but realistic-looking data, also referred to as synthetic data, that can be shared more freely. Generative Adversarial Networks (GANs) are one such class of models, and they increase our capacity to fit very high-dimensional distributions of data. While research on GANs is an active topic in fields like computer vision, they have found limited adoption within the human sciences, like economics and insurance. The reason for this is that in these fields, most questions are inherently about the identification of causal effects, while to this day neural networks, which are at the center of the GAN framework, focus mostly on high-dimensional correlations. In this paper, we study the causal preservation capabilities of GANs and whether the produced synthetic data can reliably be used to answer causal questions. This is done by performing causal analyses on the synthetic data produced by a GAN, under increasingly lenient assumptions. We consider the cross-sectional case, the time series case, and the case with a complete structural model. It is shown that in the simple cross-sectional scenario where correlation equals causation, the GAN preserves causality, but that challenges arise for more advanced analyses.
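The cross-sectional check can be illustrated with a small sketch: estimate the same regression on real and on synthetic data and compare the coefficients. The column layout and the stand-in "GAN output" below are assumptions for illustration, not the paper's setup.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def coefficient_gap(real: np.ndarray, synthetic: np.ndarray):
    """Compares regression coefficients estimated on real vs. GAN data.

    A minimal sketch of the cross-sectional case where correlation equals
    causation: if the generator preserves the causal structure, the
    coefficients estimated on synthetic data should match the real ones.
    Assumes a (features..., outcome) column layout, purely for illustration.
    """
    def fit(data):
        X, y = data[:, :-1], data[:, -1]
        return LinearRegression().fit(X, y).coef_

    return np.abs(fit(real) - fit(synthetic))

# Toy example: y = 2*x + noise; a faithful generator should reproduce ~2.
rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 1))
real = np.hstack([x, 2 * x + rng.normal(scale=0.1, size=(1000, 1))])
fake = real + rng.normal(scale=0.05, size=real.shape)  # stand-in for GAN output
print(coefficient_gap(real, fake))
```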
Deep learning models are known to put the privacy of their training data at risk, which poses challenges for their safe and ethical release to the public. Differentially private stochastic gradient descent is the de facto standard for training neural networks without leaking sensitive information about the training data. However, applying it to models for graph-structured data poses a novel challenge: unlike with i.i.d. data, sensitive information about a node in a graph can leak not only through its own gradients, but also through the gradients of all nodes within a larger neighborhood. In practice, this limits privacy-preserving deep learning on graphs to very shallow graph neural networks. We propose to solve this issue by training graph neural networks on disjoint subgraphs of a given training graph. We develop three random-walk-based methods for generating such disjoint subgraphs and perform a careful analysis of the data-generating distributions to provide strong privacy guarantees. Through extensive experiments, we show that our method greatly outperforms the state-of-the-art baseline on three large graphs, and matches or outperforms it on four smaller ones.
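To illustrate the general idea of random-walk-based disjoint subgraph generation (not the paper's exact procedures or their privacy accounting), one simple variant lets each walk claim only unvisited nodes, so the resulting subgraphs are disjoint by construction:

```python
import random
from collections import defaultdict

def disjoint_random_walk_subgraphs(edges, walk_length=8, seed=0):
    """Partitions a graph into disjoint subgraphs via random walks.

    An illustrative sketch of one possible variant, not the paper's exact
    sampling procedure: each walk claims unvisited nodes only, so every
    node ends up in at most one subgraph.
    """
    rng = random.Random(seed)
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    unvisited = set(adj)
    subgraphs = []
    while unvisited:
        node = rng.choice(tuple(unvisited))
        walk = [node]
        unvisited.discard(node)
        for _ in range(walk_length - 1):
            # Continue only over neighbors no other walk has claimed yet.
            candidates = [n for n in adj[walk[-1]] if n in unvisited]
            if not candidates:
                break
            node = rng.choice(candidates)
            walk.append(node)
            unvisited.discard(node)
        subgraphs.append(walk)
    return subgraphs
```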
Data-driven models such as neural networks are increasingly being applied to safety-critical applications, such as the modeling and control of cyber-physical systems. Despite the flexibility of the approach, there are still concerns about the safety of these models in this context, as well as about the need for large amounts of potentially expensive data. In particular, when long-term predictions are needed or frequent measurements are not available, the open-loop stability of the model becomes important. However, it is difficult to make such guarantees for complex black-box models such as neural networks, and prior work has shown that model stability is indeed an issue. In this work, we consider an aluminum extraction process where measurements of the internal state of the reactor are time-consuming and expensive. We model the process using neural networks and investigate the role of including skip connections in the network architecture, as well as of using L1 regularization to induce sparse connection weights. We demonstrate that these measures can greatly improve both the accuracy and the stability of the models for datasets of varying sizes.
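The two architectural measures are easy to picture in code. Below is a minimal PyTorch sketch, not the paper's exact model, of a fully connected network with skip connections and an L1 penalty on the weights that is added to the training loss.

```python
import torch
import torch.nn as nn

class SkipBlock(nn.Module):
    """A residual (skip-connection) block for a fully connected model."""

    def __init__(self, width: int):
        super().__init__()
        self.linear = nn.Linear(width, width)
        self.act = nn.ReLU()

    def forward(self, x):
        # The identity path biases the model toward small state changes,
        # which can help the open-loop stability of long rollouts.
        return x + self.act(self.linear(x))

def l1_penalty(model: nn.Module, lam: float = 1e-4) -> torch.Tensor:
    # Added to the training loss to induce sparse connection weights.
    return lam * sum(p.abs().sum()
                     for name, p in model.named_parameters()
                     if "weight" in name)

# Hypothetical dimensions for illustration: state + control in, next state
# out, trained with an MSE loss plus l1_penalty(model).
model = nn.Sequential(nn.Linear(6, 64), SkipBlock(64), SkipBlock(64),
                      nn.Linear(64, 4))
```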